Supplementary Material of Real-Time Motion Prediction via Heterogeneous Polyline Transformer with Relative Pose Encoding
Anonymous Author(s)
The feed-forward hidden dimension of the Transformers is set to 1024, and the AC-to-all Transformer decoders have 2 layers. The same setup is used for both the WOMD and AV2 datasets. In the following, we report the configurations of the ablation models. Some of them exceed the available VRAM (an RTX 3090 in our case) because they require more GPU memory at training time.
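As a concrete illustration, the sketch below instantiates a decoder with this configuration in PyTorch. Only the feed-forward hidden dimension (1024) and the depth (2 layers) come from the text above; the model width (d_model=256) and number of attention heads (nhead=4) are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Minimal sketch of the AC-to-all decoder configuration described above.
D_MODEL = 256   # assumed hidden size (not stated in the text)
N_HEAD = 4      # assumed number of attention heads (not stated in the text)
FFN_DIM = 1024  # feed-forward hidden dimension, as stated above
N_LAYERS = 2    # AC-to-all decoder depth, as stated above

decoder_layer = nn.TransformerDecoderLayer(
    d_model=D_MODEL,
    nhead=N_HEAD,
    dim_feedforward=FFN_DIM,
    batch_first=True,
)
ac_to_all_decoder = nn.TransformerDecoder(decoder_layer, num_layers=N_LAYERS)

# Dummy forward pass: 8 agent-centric queries attending to 64 polyline tokens.
queries = torch.randn(2, 8, D_MODEL)  # (batch, num_queries, d_model)
memory = torch.randn(2, 64, D_MODEL)  # (batch, num_tokens, d_model)
out = ac_to_all_decoder(tgt=queries, memory=memory)
print(out.shape)  # torch.Size([2, 8, 256])
```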
DB2-TransF: All You Need Is Learnable Daubechies Wavelets for Time Series Forecasting
Moulik Gupta, Achyut Mani Tripathi
Model | Category | Key Characteristics
iTransformer [11] | Transformer-based | Processes each variate independently prior to multivariate fusion; regarded as the current state-of-the-art (SOTA) in time series forecasting.
PatchTST [42] | Transformer-based | Divides the time series into patches and applies channel-independent shared embeddings and weights for feature extraction.
Crossformer [35] | Transformer-based | Utilizes cross-attention mechanisms to effectively capture long-range dependencies across temporal sequences.
FEDformer [43] | Transformer-based | Improves Transformer performance by leveraging frequency-domain sparsity, typically through Fourier transforms.
Autoformer [33] | Transformer-based | Employs a decomposition-based architecture combined with an auto-correlation mechanism for effective time series modeling.
RLinear [44] | Linear-based | A state-of-the-art linear model that incorporates reversible normalization and assumes channel independence.
TiDE [45] | Linear-based | An encoder-decoder architecture built entirely from multi-layer perceptrons (MLPs).
DLinear [46] | Linear-based | Among the earliest linear models for time series forecasting, using a single-layer architecture combined with series decomposition.
TimesNet [28] | Temporal Conv-based | Employs 2D convolutional kernels (TimesBlock) to model both intra-period and inter-period variations in time series data.
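To make the contrast between the Transformer-based and linear-based baselines concrete, the sketch below implements a minimal DLinear-style forecaster: the input series is decomposed into a trend part (moving average) and a seasonal part (residual), and each part is mapped to the forecast horizon with a single linear layer. The hyperparameters (seq_len, pred_len, kernel_size) are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class DLinearSketch(nn.Module):
    """Minimal DLinear-style forecaster: series decomposition plus one
    linear map per component. Hyperparameters are illustrative only."""

    def __init__(self, seq_len: int = 96, pred_len: int = 24, kernel_size: int = 25):
        super().__init__()
        # Moving average over time extracts the trend component.
        self.moving_avg = nn.AvgPool1d(
            kernel_size, stride=1, padding=kernel_size // 2, count_include_pad=False
        )
        self.linear_trend = nn.Linear(seq_len, pred_len)
        self.linear_seasonal = nn.Linear(seq_len, pred_len)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, num_variates); each variate is handled independently.
        x = x.transpose(1, 2)        # -> (batch, num_variates, seq_len)
        trend = self.moving_avg(x)   # smoothed trend component
        seasonal = x - trend         # residual seasonal component
        y = self.linear_trend(trend) + self.linear_seasonal(seasonal)
        return y.transpose(1, 2)     # -> (batch, pred_len, num_variates)

model = DLinearSketch()
out = model(torch.randn(4, 96, 7))  # 4 samples, 96 input steps, 7 variates
print(out.shape)  # torch.Size([4, 24, 7])
```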